Message-Passing Algorithms for Quadratic Minimization
Gaussian belief propagation (GaBP) is an iterative algorithm for computing
the mean of a multivariate Gaussian distribution, or equivalently, the minimum
of a multivariate positive definite quadratic function. Sufficient conditions,
such as walk-summability, that guarantee the convergence and correctness of
GaBP are known, but GaBP may fail to converge to the correct solution given an
arbitrary positive definite quadratic function. As was observed in previous
work, the GaBP algorithm fails to converge if the computation trees produced by
the algorithm are not positive definite. In this work, we will show that the
failure modes of the GaBP algorithm can be understood via graph covers, and we
prove that a parameterized generalization of the min-sum algorithm can be used
to ensure that the computation trees remain positive definite whenever the
input matrix is positive definite. We demonstrate that the resulting algorithm
is closely related to other iterative schemes for quadratic minimization such
as the Gauss-Seidel and Jacobi algorithms. Finally, we observe, empirically,
that there always exists a choice of parameters such that the above
generalization of the GaBP algorithm converges.
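As a point of comparison for the iterative schemes mentioned above, the following is a minimal sketch (not code from the paper) of Jacobi iteration for quadratic minimization. The matrix, vector, and iteration count are illustrative choices; plain Jacobi converges under conditions such as strict diagonal dominance, a stronger requirement than positive definiteness, much as walk-summability is stronger than positive definiteness for GaBP.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x by Jacobi iteration.
# A and b are illustrative; this A is strictly diagonally dominant,
# so the iteration is guaranteed to converge.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

D = np.diag(A)          # diagonal of A
R = A - np.diag(D)      # off-diagonal remainder
x = np.zeros_like(b)
for _ in range(100):
    x = (b - R @ x) / D  # x_{k+1} = D^{-1} (b - R x_k)

# At convergence, x solves A x = b, the minimizer of f.
print(np.allclose(A @ x, b, atol=1e-6))
```

Gauss-Seidel differs only in using each freshly updated coordinate immediately within a sweep, which typically speeds convergence on the same class of matrices.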
Applications of Metric Coinduction
Metric coinduction is a form of coinduction that can be used to establish
properties of objects constructed as a limit of finite approximations. One can
prove a coinduction step showing that some property is preserved by one step of
the approximation process, then automatically infer by the coinduction
principle that the property holds of the limit object. This can often be used
to avoid complicated analytic arguments involving limits and convergence,
replacing them with simpler algebraic arguments. This paper examines the
application of this principle in a variety of areas, including infinite
streams, Markov chains, Markov decision processes, and non-well-founded sets.
These results point to the usefulness of coinduction as a general proof
technique.
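A minimal numerical sketch of the principle (illustrative, not an example from the paper): a closed, nonempty property that is preserved by a contraction also holds of the contraction's unique fixed point, so the limit inherits the property without an explicit epsilon-delta argument.

```python
# H(x) = (x + 2) / 2 is a contraction on the reals with factor 1/2
# and unique fixed point 2. The closed property "x <= 2" is preserved
# by one step of H (if x <= 2 then (x + 2)/2 <= 2), so by the metric
# coinduction principle the limit of the approximations satisfies it.
def H(x):
    return (x + 2.0) / 2.0

x = 0.0                    # any starting approximation
for _ in range(60):
    assert x <= 2.0        # coinduction step: property is preserved
    x = H(x)

print(abs(x - 2.0) < 1e-12)  # the approximations converge to the fixed point
print(x <= 2.0)              # ... and the property holds of the limit
```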
Mean Shift Mask Transformer for Unseen Object Instance Segmentation
Segmenting unseen objects is a critical task in many different domains. For
example, a robot may need to grasp an unseen object, which means it needs to
visually separate this object from the background and/or other objects. Mean
shift clustering is a common method in object segmentation tasks. However, the
traditional mean shift clustering algorithm is not easily integrated into an
end-to-end neural network training pipeline. In this work, we propose the Mean
Shift Mask Transformer (MSMFormer), a new transformer architecture that
simulates the von Mises-Fisher (vMF) mean shift clustering algorithm, allowing
for the joint training and inference of both the feature extractor and the
clustering. Its central component is a hypersphere attention mechanism, which
updates object queries on a hypersphere. To illustrate the effectiveness of our
method, we apply MSMFormer to Unseen Object Instance Segmentation, which yields
a new state-of-the-art of 87.3 Boundary F-measure on the real-world Object
Clutter Indoor Dataset (OCID). Code is available at
https://github.com/YoungSean/UnseenObjectsWithMeanShift
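For intuition about the clustering step that MSMFormer simulates, here is a minimal sketch (not the MSMFormer code) of von Mises-Fisher mean shift on the unit hypersphere: each feature vector moves to the kappa-weighted mean direction of all vectors and is renormalized back onto the sphere, so points sharing a mode collapse into one cluster. The data, kappa, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = 20.0  # vMF concentration parameter (illustrative)

# Two noisy clusters of unit vectors around opposite directions.
X = np.concatenate([
    rng.normal([1.0, 0.0, 0.0], 0.1, size=(20, 3)),
    rng.normal([-1.0, 0.0, 0.0], 0.1, size=(20, 3)),
])
X /= np.linalg.norm(X, axis=1, keepdims=True)

Z = X.copy()
for _ in range(30):
    W = np.exp(kappa * (Z @ X.T))   # vMF kernel weights from cosine similarity
    Z = W @ X                       # weighted mean direction
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)  # project back to the sphere

labels = (Z[:, 0] < 0).astype(int)  # the converged modes separate the clusters
```

In MSMFormer this update is realized as a hypersphere attention mechanism, so the kernel-weighted averaging above becomes a differentiable layer that can be trained end-to-end with the feature extractor.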